
Author Search Results

[Author] Takayuki NAKACHI (25 hits)

Showing hits 1-20 of 25

  • An MMT-Based Hierarchical Transmission Module for 4K/120fps Temporally Scalable Video

    Yasuhiro MOCHIDA  Takayuki NAKACHI  Takahiro YAMAGUCHI  

     
    PAPER

      Publicized:
    2020/06/22
      Vol:
    E103-D No:10
      Page(s):
    2059-2066

High frame rate (HFR) video is attracting strong interest since it is considered a next step toward providing Ultra-High Definition video services. For instance, the Association of Radio Industries and Businesses (ARIB) standard, the latest broadcasting standard in Japan, defines a 120 fps broadcasting format. The standard stipulates temporally scalable coding and hierarchical transmission by MPEG Media Transport (MMT), in which the base layer and the enhancement layer are transmitted over different paths for flexible distribution. We have developed the first-ever MMT transmitter/receiver module for 4K/120fps temporally scalable video. The module is equipped with a newly proposed encapsulation method for temporally scalable bitstreams with correct boundaries. It is also designed to be tolerant of severe network constraints, including packet loss, arrival timing offset, and delay jitter. We conducted a hierarchical transmission experiment for 4K/120fps temporally scalable video. The experiment demonstrated that the MMT module was successfully fabricated and capable of dealing with severe network constraints. Consequently, the module has excellent potential as a means to support HFR video distribution in various network situations.
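
In the hierarchical transmission described above, the base temporal sub-layer and the enhancement sub-layer travel over different paths. As a rough illustration of the layer-separation step only (not the paper's MMT encapsulation or its transmitter/receiver module), the sketch below splits an HEVC Annex-B byte stream into base and enhancement NAL units by reading nuh_temporal_id_plus1 from the two-byte NAL unit header; start-code handling is simplified and the sample stream is synthetic.

```python
# Illustrative sketch only: separate base-layer (temporal sub-layer 0) and
# enhancement-layer NAL units of an HEVC Annex-B stream. Not the paper's module.

def split_nal_units(data: bytes):
    """Yield NAL unit payloads found between Annex-B start codes (simplified)."""
    i, starts = 0, []
    while True:
        j = data.find(b"\x00\x00\x01", i)
        if j < 0:
            break
        starts.append(j + 3)
        i = j + 3
    starts.append(len(data) + 3)                 # sentinel for the last unit
    for a, b in zip(starts, starts[1:]):
        nal = data[a:b - 3].rstrip(b"\x00")      # drop trailing zero bytes before the next start code
        if len(nal) >= 2:
            yield nal

def temporal_id(nal: bytes) -> int:
    # The HEVC NAL unit header is 2 bytes; the low 3 bits of the second byte
    # carry nuh_temporal_id_plus1.
    return (nal[1] & 0x07) - 1

def split_layers(stream: bytes):
    base, enhancement = [], []
    for nal in split_nal_units(stream):
        (base if temporal_id(nal) == 0 else enhancement).append(nal)
    return base, enhancement

if __name__ == "__main__":
    # Tiny synthetic example (header bytes only; real streams carry full payloads).
    stream = (b"\x00\x00\x00\x01" + bytes([0x26, 0x01])            # IDR slice, temporal_id 0
              + b"\x00\x00\x01" + bytes([0x00, 0x02]) + b"\xaa")   # non-reference slice, temporal_id 1
    base, enh = split_layers(stream)
    print(len(base), "base-layer NAL unit(s),", len(enh), "enhancement NAL unit(s)")
```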

  • A 2-D Adaptive Joint-Process Lattice Estimator for Image Restoration

    Takayuki NAKACHI  Katsumi YAMASHITA  Nozomu HAMADA  

     
    PAPER-Digital Signal Processing

      Vol:
    E80-A No:1
      Page(s):
    140-147

The present paper examines a two-dimensional (2-D) joint-process lattice estimator and its implementation for image restoration. The gradient adaptive lattice (GAL) algorithm is used to update the filter coefficients. The proposed adaptive lattice estimator can represent a wider class of 2-D FIR systems than the conventional 2-D lattice models. Furthermore, its structure possesses orthogonality between the backward prediction errors. These result in superior convergence and tracking properties compared with the transversal and other 2-D adaptive lattice estimators. The validity of the proposed model for image restoration is evaluated through computer simulations. In the examples, the implementation of the proposed lattice estimator as a 2-D adaptive noise canceller (ANC) and a 2-D adaptive line enhancer (ALE) is considered.
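
The GAL update and the joint-process (ladder) section are easiest to see in one dimension. The sketch below is a simplified 1-D stand-in, assuming hypothetical orders and step sizes, not the paper's 2-D estimator: reflection coefficients are updated from the forward/backward prediction errors, and the ladder weights estimate a desired signal from the (approximately orthogonal) backward errors.

```python
import numpy as np

def gal_joint_process(x, d, order=4, mu_k=0.05, mu_w=0.05, beta=0.9, eps=1e-6):
    """1-D gradient adaptive lattice (GAL) joint-process estimator (illustrative sketch).

    x: reference input, d: desired signal. Returns the estimation error sequence.
    """
    N = len(x)
    k = np.zeros(order)             # reflection coefficients
    w = np.zeros(order + 1)         # joint-process (ladder) weights
    E = np.full(order, eps)         # per-stage power estimates
    b_prev = np.zeros(order + 1)    # backward errors from the previous sample
    e_out = np.zeros(N)

    for n in range(N):
        f = np.zeros(order + 1)
        b = np.zeros(order + 1)
        f[0] = b[0] = x[n]
        for m in range(1, order + 1):           # lattice (prediction) section
            f[m] = f[m - 1] - k[m - 1] * b_prev[m - 1]
            b[m] = b_prev[m - 1] - k[m - 1] * f[m - 1]
            E[m - 1] = beta * E[m - 1] + (1 - beta) * (f[m - 1] ** 2 + b_prev[m - 1] ** 2)
            k[m - 1] += mu_k / (E[m - 1] + eps) * (f[m] * b_prev[m - 1] + b[m] * f[m - 1])
        e = d[n]                                # joint-process (ladder) section
        for m in range(order + 1):
            e -= w[m] * b[m]
            w[m] += mu_w * e * b[m] / (b[m] ** 2 + eps)
        e_out[n] = e
        b_prev = b
    return e_out

# Example use as an adaptive noise canceller (ANC): "noise" is the reference input,
# "d" contains a tone buried in correlated noise; the output error recovers the tone.
# rng = np.random.default_rng(0)
# noise = rng.standard_normal(4000)
# d = np.sin(0.05 * np.arange(4000)) + np.convolve(noise, [1.0, 0.6, 0.2])[:4000]
# tone_estimate = gal_joint_process(noise, d)
```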

  • Two-Dimensional Least Squares Lattice Algorithm for Linear Prediction

    Takayuki NAKACHI  Katsumi YAMASHITA  Nozomu HAMADA  

     
    LETTER-Digital Signal Processing

      Vol:
    E80-A No:11
      Page(s):
    2325-2329

In this paper, we propose a two-dimensional (2-D) least-squares lattice (LSL) algorithm for the general case of the autoregressive (AR) model with an asymmetric half-plane (AHP) coefficient support. The resulting LSL algorithm gives both order and space recursions for the 2-D deterministic normal equations. The size and shape of the coefficient support region of the proposed lattice filter can be chosen arbitrarily. Furthermore, the ordering of the support signal can be assigned arbitrarily. Finally, a computer simulation of texture image modeling is presented to confirm that the proposed model gives rapid convergence.

  • FOREWORD

    Takayuki NAKACHI  Makoto NAKASHIZUKA  

     
    FOREWORD

      Vol:
    E99-A No:11
      Page(s):
    1907-1908
  • Lossless and Near-Lossless Color Image Coding Using Edge Adaptive Quantization

    Takayuki NAKACHI  Tatsuya FUJII  

     
    PAPER-Coding Theory

      Vol:
    E84-A No:4
      Page(s):
    1064-1073

    This paper proposes a unified coding algorithm for the lossless and near-lossless compression of still color images. The proposed unified color image coding scheme can control the Peak Signal-to-Noise Ratio (PSNR) of the reconstructed image while the level of distortion on the RGB plane is suppressed to within a preset magnitude. In order to control the PSNR, the distortion level is adaptively changed at each pixel. An adaptive quantizer to control the distortion is designed on the basis of psychovisual criteria. Finally, experiments on Super High Definition (SHD) images show the effectiveness of the proposed algorithm.
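
In the near-lossless mode, each prediction residual is quantized so that the reconstruction error never exceeds a preset magnitude. A minimal sketch of this kind of bounded-error quantization is shown below, assuming a JPEG-LS-style uniform quantizer with a fixed error bound delta; the paper's quantizer is psychovisually designed and adapts delta per pixel, which this sketch does not reproduce.

```python
import numpy as np

def quantize_residual(e, delta):
    """Uniform near-lossless quantizer: reconstruction error is at most delta."""
    step = 2 * delta + 1
    return np.sign(e) * ((np.abs(e) + delta) // step)

def dequantize_residual(q, delta):
    return q * (2 * delta + 1)

# Check the error bound on random prediction residuals.
rng = np.random.default_rng(0)
e = rng.integers(-255, 256, size=10_000)
for delta in (0, 1, 2, 4):                       # delta = 0 reduces to lossless coding
    q = quantize_residual(e, delta)
    err = np.abs(e - dequantize_residual(q, delta))
    print(f"delta={delta}: max reconstruction error = {err.max()}")   # always <= delta
```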

  • Extended Multiresolution Lossless Video Coding Using In-Band Spatio-Temporal Prediction

    Takayuki NAKACHI  Tomoko SAWABE  Tetsuro FUJII  

     
    PAPER-Image/Vision Processing

      Vol:
    E89-A No:3
      Page(s):
    698-707

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. A progressive transmission scheme is also proposed for effective resolution scalability. Experiments on test sequences confirm the effectiveness of the proposed algorithm.
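
A rough sketch of the adaptive inter/intra decision only, assuming a per-subband choice by residual energy: a one-level (non-integer) Haar transform is used for brevity, so a reversible integer wavelet and an entropy coder would still be needed for actual lossless coding, and the paper's statistics-based selection rule and progressive transmission are not reproduced here.

```python
import numpy as np

def haar2d(frame):
    """One-level 2-D Haar transform (non-integer, for illustration only)."""
    a = frame.astype(np.float64)                 # assumes even height and width
    lo, hi = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    return {"LL": (lo[0::2] + lo[1::2]) / 2, "LH": (lo[0::2] - lo[1::2]) / 2,
            "HL": (hi[0::2] + hi[1::2]) / 2, "HH": (hi[0::2] - hi[1::2]) / 2}

def predict_subbands(curr_frame, prev_frame):
    """Per-subband inter/intra decision by comparing residual energies."""
    curr, prev = haar2d(curr_frame), haar2d(prev_frame)
    residuals, modes = {}, {}
    for name, band in curr.items():
        inter = band - prev[name]                # temporal (inter-frame) prediction
        intra = band.copy()
        intra[:, 1:] -= band[:, :-1]             # simple left-neighbour intra prediction
        better_inter = np.sum(inter ** 2) <= np.sum(intra ** 2)
        residuals[name] = inter if better_inter else intra
        modes[name] = "inter" if better_inter else "intra"
    return residuals, modes

# Hypothetical frames: the current frame is a noisy copy of a smooth previous frame.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:64, 0:64]
prev = (np.sin(xx / 6.0) + np.cos(yy / 9.0)) * 60 + 128
curr = prev + rng.normal(0, 2, (64, 64))
_, modes = predict_subbands(curr, prev)
print(modes)    # typically selects "inter" when consecutive frames are similar
```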

  • Two Types of Adaptive Beamformer Using 2-D Joint Process Lattice Estimator

    Tateo YAMAOKA  Takayuki NAKACHI  Nozomu HAMADA  

     
    PAPER-Digital Signal Processing

      Vol:
    E81-A No:1
      Page(s):
    117-122

This paper presents two types of two-dimensional (2-D) adaptive beamforming algorithms with a high rate of convergence. One is a linearly constrained minimum variance (LCMV) beamforming algorithm, which minimizes the average output power of the beamformer, and the other is a generalized sidelobe canceler (GSC) algorithm, which generalizes the notion of a linear constraint by using multiple linear constraints. In both algorithms, we apply a 2-D lattice filter to the adaptive filtering, since the 2-D lattice filter provides excellent properties compared with a transversal filter. In order to evaluate the validity of the algorithms, we perform computer simulations. The experimental results show that the algorithms can reject interference signals while maintaining the direction of the desired signal, and can improve convergence performance.
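
Both algorithms solve the linearly constrained minimum-variance problem; with a single unit-gain constraint toward the desired direction, the closed-form solution is w = R^{-1}a / (a^H R^{-1} a). The sketch below computes this narrowband MVDR solution directly with NumPy on a hypothetical uniform linear array; the paper instead realizes the LCMV and GSC beamformers adaptively with a 2-D lattice filter, which this sketch does not reproduce.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    """Uniform linear array steering vector (sensor spacing in wavelengths)."""
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def mvdr_weights(snapshots, theta_deg, diag_load=1e-3):
    """LCMV with a single constraint (MVDR): minimize output power s.t. w^H a = 1."""
    n_sensors, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap
    R += diag_load * np.trace(R).real / n_sensors * np.eye(n_sensors)   # diagonal loading
    a = steering_vector(theta_deg, n_sensors)
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

# Hypothetical scenario: desired signal at 0 deg, strong interferer at 40 deg.
rng = np.random.default_rng(1)
M, N = 8, 2000
sig = rng.standard_normal(N) * steering_vector(0, M)[:, None]
interf = 5 * rng.standard_normal(N) * steering_vector(40, M)[:, None]
x = sig + interf + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
w = mvdr_weights(x, theta_deg=0)
print("gain toward 0 deg :", abs(w.conj() @ steering_vector(0, M)))    # ~1 (constraint)
print("gain toward 40 deg:", abs(w.conj() @ steering_vector(40, M)))   # close to 0
```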

  • Block Estimation Method for Two-Dimensional Adaptive Lattice Filter

    InHwan KIM  Takayuki NAKACHI  Nozomu HAMADA  

     
    PAPER-Digital Signal Processing

      Vol:
    E80-A No:4
      Page(s):
    737-744

In the adaptive lattice estimation process, it is well known that the convergence speed of each successive stage is affected by the estimation errors of the reflection coefficients in the preceding stages. In this paper, we propose block estimation methods for a two-dimensional (2-D) adaptive lattice filter. The convergence speed of the proposed algorithm is significantly enhanced by improving the adaptive performance of the preceding stages. Furthermore, the process can be realized simply. The modeling of a 2-D AR field and a texture image is demonstrated through computer simulations.

  • A Study on Non-octave Scalable Image Coding and Its Performance Evaluation Using Digital Cinema Test Material

    Takayuki NAKACHI  Tomoko SAWABE  Junji SUZUKI  Tetsuro FUJII  

     
    PAPER-Image

      Vol:
    E89-A No:9
      Page(s):
    2405-2414

JPEG2000, an international standard for still image compression, offers 1) high coding performance, 2) unified lossless/lossy compression, and 3) resolution and SNR scalability. Resolution scalability is an especially promising attribute given the popularity of Super High Definition (SHD) images like digital cinema. Unfortunately, its current implementation of resolution scalability is restricted to powers of two. In this paper, we introduce non-octave scalable coding (NSC) based on the use of filter banks. Two types of non-octave scalable coding are implemented. One is based on a DCT filter bank and the other uses a wavelet transform. The latter is compatible with JPEG2000 Part 2. By using the proposed algorithm, images with rational scale resolutions can be decoded from a compressed bit stream. Experiments on digital cinema test material show the effectiveness of the proposed algorithm.

  • A Design Method of an Adaptive Multichannel IIR Lattice Predictor for k-Step Ahead Prediction

    Katsumi YAMASHITA  M. H. KAHAI  Takayuki NAKACHI  Hayao MIYAGI  

     
    LETTER-Adaptive Signal Processing

      Vol:
    E76-A No:8
      Page(s):
    1350-1352

    An adaptive multichannel IIR lattice predictor for k-step ahead prediction is constructed and the effectiveness of the proposed predictor is evaluated using digital simulations.

  • Network Traffic Anomaly Detection: A Revisiting to Gaussian Process and Sparse Representation

    Yitu WANG  Takayuki NAKACHI  

     
    PAPER-Communication Theory and Signals

      Publicized:
    2023/06/27
      Vol:
    E107-A No:1
      Page(s):
    125-133

Seen from the Internet Service Provider (ISP) side, network traffic monitoring is an indispensable part of network service provisioning, as it facilitates maintaining the security and reliability of communication networks. Among the numerous traffic conditions, extra attention should be paid to traffic anomalies, which significantly affect network performance. With the advancement of Machine Learning (ML), data-driven traffic anomaly detection algorithms have established a high reputation due to their high accuracy and generality. However, they face the challenges of inefficient traffic feature extraction and high computational complexity, especially when the evolving nature of the traffic process is taken into consideration. In this paper, we propose an online learning framework for traffic anomaly detection that embraces Gaussian Processes (GP) and Sparse Representation (SR) in two steps: 1) to extract traffic features from past records and better understand these features, we adopt a GP with a special kernel, i.e., a mixture of Gaussians in the spectral domain, which makes it possible to model the network traffic more accurately and thereby improve the performance of traffic anomaly detection; 2) to combat noise and modeling error, and observing the inherent self-similarity and periodicity of network traffic, we manually design a feature vector, based on which SR is adopted to perform robust binary classification. Finally, we demonstrate the superiority of the proposed framework in terms of detection accuracy through simulation.
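
A minimal sketch of the GP step only, assuming synthetic hourly traffic: scikit-learn has no spectral-mixture kernel, so a periodic-times-RBF kernel plus white noise stands in for the "mixture of Gaussians in the spectral domain", and samples falling outside the GP's 3-sigma predictive band are flagged. The paper's SR-based classification step is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import (ConstantKernel, ExpSineSquared,
                                              RBF, WhiteKernel)

# Synthetic "hourly traffic volume": daily periodicity plus noise, with one injected spike.
rng = np.random.default_rng(0)
t = np.arange(0, 7 * 24, 1.0)
traffic = 100 + 30 * np.sin(2 * np.pi * t / 24) + 3 * rng.standard_normal(t.size)
traffic[100] += 60                                      # the anomaly to be detected

train_t, train_y = t[:96], traffic[:96]                 # first four days as history
offset = train_y.mean()
kernel = (ConstantKernel(400.0) * ExpSineSquared(length_scale=10.0, periodicity=24.0)
          * RBF(length_scale=50.0) + WhiteKernel(noise_level=10.0))
gp = GaussianProcessRegressor(kernel=kernel).fit(train_t[:, None], train_y - offset)

mean, std = gp.predict(t[:, None], return_std=True)
noise_var = gp.kernel_.k2.noise_level                   # fitted white-noise variance
band = 3 * np.sqrt(std ** 2 + noise_var)                # 3-sigma predictive band
anomalies = np.where(np.abs(traffic - (mean + offset)) > band)[0]
print("flagged samples:", anomalies)                    # expected to include index 100
```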

  • Secure Overcomplete Dictionary Learning for Sparse Representation

    Takayuki NAKACHI  Yukihiro BANDOH  Hitoshi KIYA  

     
    PAPER

      Publicized:
    2019/10/09
      Vol:
    E103-D No:1
      Page(s):
    50-58

In this paper, we propose secure dictionary learning based on a random unitary transform for sparse representation. Currently, edge cloud computing is spreading to many application fields, including services that use sparse coding. This situation raises many new privacy concerns. Edge cloud computing poses several serious issues for end users, such as unauthorized use and leakage of data, and privacy failures. The proposed scheme provides practical MOD and K-SVD dictionary learning algorithms that allow computation on encrypted signals. We prove, theoretically, that the proposal has exactly the same dictionary learning estimation performance as the non-encrypted variants of the MOD and K-SVD algorithms. We apply it to secure image modeling based on an image patch model. Finally, we demonstrate its performance on synthetic data and in a secure image modeling application for natural images.
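
The equivalence between encrypted and plain dictionary learning rests on the fact that a unitary transform preserves inner products and norms, so the Gram matrices and least-squares updates inside MOD and K-SVD are unchanged. Below is a minimal numerical check of that property, using a random real orthogonal matrix generated by QR decomposition as an illustration (not necessarily the paper's key generation).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 128, 1000             # signal dimension, dictionary atoms, training signals

# Random orthogonal (real unitary) "encryption" matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

D = rng.standard_normal((n, m))     # dictionary
Y = rng.standard_normal((n, k))     # training signals
D_enc, Y_enc = Q @ D, Q @ Y         # encrypted dictionary and signals

# Inner products (and hence the Gram matrices, residual norms, and least-squares
# updates used inside MOD / K-SVD) are preserved.
assert np.allclose(D_enc.T @ Y_enc, D.T @ Y)
assert np.allclose(D_enc.T @ D_enc, D.T @ D)
assert np.allclose(np.linalg.norm(Y_enc, axis=0), np.linalg.norm(Y, axis=0))
print("Gram matrices and norms match under the random orthogonal transform")
```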

  • Secure OMP Computation Maintaining Sparse Representations and Its Application to EtC Systems

    Takayuki NAKACHI  Hitoshi KIYA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2020/06/22
      Vol:
    E103-D No:9
      Page(s):
    1988-1997

    In this paper, we propose a secure computation of sparse coding and its application to Encryption-then-Compression (EtC) systems. The proposed scheme introduces secure sparse coding that allows computation of an Orthogonal Matching Pursuit (OMP) algorithm in an encrypted domain. We prove theoretically that the proposed method estimates exactly the same sparse representations that the OMP algorithm for non-encrypted computation does. This means that there is no degradation of the sparse representation performance. Furthermore, the proposed method can control the sparsity without decoding the encrypted signals. Next, we propose an EtC system based on the secure sparse coding. The proposed secure EtC system can protect the private information of the original image contents while performing image compression. It provides the same rate-distortion performance as that of sparse coding without encryption, as demonstrated on both synthetic data and natural images.
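
A minimal check of the claimed property, using a textbook OMP implementation rather than the paper's: running OMP on the transformed pair (QD, Qy), with Q a random orthogonal matrix, selects the same atoms as running it on (D, y), because all correlations and residual norms are preserved.

```python
import numpy as np

def omp(D, y, sparsity):
    """Textbook Orthogonal Matching Pursuit: greedy atom selection + least-squares refit."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs

rng = np.random.default_rng(0)
n, m, s = 32, 64, 4
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
x_true = np.zeros(m)
x_true[rng.choice(m, s, replace=False)] = rng.standard_normal(s)
y = D @ x_true

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal transform
plain_support, _ = omp(D, y, s)
secure_support, _ = omp(Q @ D, Q @ y, s)
print(sorted(plain_support) == sorted(secure_support))   # True: the same atoms are selected
```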

  • 2-D Adaptive Autoregressive Modeling Using New Lattice Structure

    Takayuki NAKACHI  Katsumi YAMASHITA  Nozomu HAMADA  

     
    PAPER

      Vol:
    E79-A No:8
      Page(s):
    1145-1150

The present paper investigates a two-dimensional (2-D) adaptive lattice filter used for modeling 2-D AR fields. The 2-D least mean square (LMS) lattice algorithm is used to update the filter coefficients. The proposed adaptive lattice filter can represent a wider class of 2-D AR fields than previous ones. Furthermore, its structure is also shown to possess orthogonality in the backward prediction error fields. These result in superior convergence and tracking properties compared with the adaptive transversal filter and other 2-D adaptive lattice models. Then, the convergence property of the proposed adaptive LMS lattice algorithm is discussed. The effectiveness of the proposed model is evaluated for parameter identification through computer simulation.

  • Layered Multicast Encryption of Motion JPEG2000 Code Streams for Flexible Access Control

    Takayuki NAKACHI  Kan TOYOSHIMA  Yoshihide TONOMURA  Tatsuya FUJII  

     
    PAPER-Video Processing

      Vol:
    E95-D No:5
      Page(s):
    1301-1312

    In this paper, we propose a layered multicast encryption scheme that provides flexible access control to motion JPEG2000 code streams. JPEG2000 generates layered code streams and offers flexible scalability in characteristics such as resolution and SNR. The layered multicast encryption proposal allows a sender to multicast the encrypted JPEG2000 code streams such that only designated groups of users can decrypt the layered code streams. While keeping the layering functionality, the proposed method offers useful properties such as 1) video quality control using only one private key, 2) guaranteed security, and 3) low computational complexity comparable to conventional non-layered encryption. Simulation results show the usefulness of the proposed method.
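
One common way to obtain "video quality control using only one private key" for a layered stream is a hash-chain key hierarchy: a subscriber holds only the key of the highest layer they may access and derives the keys of all lower layers by repeated hashing. The sketch below shows this generic construction for illustration; the paper's encryption scheme may differ in detail.

```python
import hashlib
import os

def layer_keys(top_key: bytes, n_layers: int):
    """Derive per-layer keys by a hash chain: key[i-1] = SHA-256(key[i]).

    A user holding the key of layer L can derive the keys of layers 0..L,
    but cannot compute keys of higher layers (preimage resistance).
    """
    keys = [top_key]
    for _ in range(n_layers - 1):
        keys.append(hashlib.sha256(keys[-1]).digest())
    return list(reversed(keys))             # keys[0] = lowest-quality layer

master = os.urandom(32)                     # key of the highest-quality layer
all_keys = layer_keys(master, n_layers=4)

# A subscriber entitled up to layer 2 receives only all_keys[2] and
# re-derives the lower-layer keys locally.
subscriber_keys = layer_keys(all_keys[2], n_layers=3)
print(subscriber_keys == all_keys[:3])      # True
```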

  • Layered Low-Density Generator Matrix Codes for Super High Definition Scalable Video Coding System

    Yoshihide TONOMURA  Daisuke SHIRAI  Takayuki NAKACHI  Tatsuya FUJII  Hitoshi KIYA  

     
    PAPER

      Vol:
    E92-A No:3
      Page(s):
    798-807

In this paper, we introduce layered low-density generator matrix (Layered-LDGM) codes for super high definition (SHD) scalable video systems. The layered-LDGM codes maintain the correspondence relationship of each layer from the encoder side to the decoder side. The resulting structure supports partial decoding. Furthermore, the proposed layered-LDGM codes create highly efficient forward error correcting (FEC) data by considering the relationship between the scalable components. Therefore, the proposed layered-LDGM codes raise the probability of restoring the important components. Simulations show that the proposed layered-LDGM codes offer better error resiliency than the existing method, which creates FEC data for each scalable component independently. The proposed layered-LDGM codes support partial decoding and raise the probability of restoring the base component. These characteristics are well suited to scalable video coding systems.

  • A Unified Coding Algorithm of Lossless and Near-Lossless Color Image Compression

    Takayuki NAKACHI  Tatsuya FUJII  Junji SUZUKI  

     
    PAPER

      Vol:
    E83-A No:2
      Page(s):
    301-310

    This paper describes a unified coding algorithm for lossless and near-lossless color image compression that exploits the correlations between RGB signals. A reversible color transform that removes the correlations between RGB signals while avoiding any finite word length limitation is proposed for the lossless case. The resulting algorithm gives higher performance than the lossless JPEG without the color transform. Next, the lossless algorithm is extended to a unified coding algorithm of lossless and near-lossless compression schemes that can control the level of the reconstruction error on the RGB plane from 0 to p, where p is a certain small non-negative integer. The effectiveness of this algorithm was demonstrated experimentally.
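
For reference, a well-known example of a color transform that decorrelates RGB while remaining exactly invertible in integer arithmetic is the JPEG2000 reversible color transform (RCT); it is shown below only as an illustration of the idea, not as the transform proposed in the paper.

```python
import numpy as np

def rct_forward(r, g, b):
    """JPEG2000 reversible color transform (integer, exactly invertible)."""
    y = (r + 2 * g + b) // 4
    u = b - g
    v = r - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - (u + v) // 4            # recovers G exactly despite the floor division
    b = u + g
    r = v + g
    return r, g, b

rng = np.random.default_rng(0)
r, g, b = (rng.integers(0, 256, size=(512, 512)) for _ in range(3))
restored = rct_inverse(*rct_forward(r, g, b))
assert all(np.array_equal(x, y) for x, y in zip(restored, (r, g, b)))
print("RCT round-trip is lossless")
```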

  • Privacy-Preserving Support Vector Machine Computing Using Random Unitary Transformation

    Takahiro MAEKAWA  Ayana KAWAMURA  Takayuki NAKACHI  Hitoshi KIYA  

     
    PAPER-Image

      Vol:
    E102-A No:12
      Page(s):
    1849-1855

A privacy-preserving support vector machine (SVM) computing scheme is proposed in this paper. Cloud computing has been spreading to many fields. However, cloud computing has some serious issues for end users, such as unauthorized use of cloud services, data leaks, and compromised privacy. Accordingly, we consider privacy-preserving SVM computing. We focus on protecting the visual information of images by using a random unitary transformation. Some properties of the protected images are discussed. The proposed scheme enables us not only to protect images, but also to achieve the same performance as that of unprotected images even when using typical kernel functions such as the linear kernel, radial basis function (RBF) kernel, and polynomial kernel. Moreover, it can be carried out directly by using well-known SVM algorithms, without preparing any algorithms specialized for secure SVM computing. In an experiment, the proposed scheme is applied to a face-based authentication algorithm with SVM classifiers to confirm its effectiveness.
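
The reason protected data behave like unprotected data is that an orthogonal (real unitary) transform preserves dot products and Euclidean distances, so linear, RBF, and polynomial kernel values are unchanged. Below is a minimal check with scikit-learn on synthetic feature vectors (not the paper's face-image experiment); the kernel parameters gamma and coef0 are fixed to hypothetical values so that both models see mathematically identical kernel matrices.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, dim = 200, 64
X = rng.standard_normal((n_samples, dim))
y = (X[:, :8].sum(axis=1) > 0).astype(int)              # synthetic labels

Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))    # random orthogonal transform
X_protected = X @ Q.T                                   # "protected" feature vectors

for kernel in ("linear", "rbf", "poly"):
    clf_plain = SVC(kernel=kernel, gamma=0.01, coef0=1.0).fit(X, y)
    clf_prot = SVC(kernel=kernel, gamma=0.01, coef0=1.0).fit(X_protected, y)
    same = np.array_equal(clf_plain.predict(X), clf_prot.predict(X_protected))
    # Kernel matrices are preserved by Q, so the predictions should coincide.
    print(kernel, "-> identical predictions:", same)
```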

  • Distributed Video Coding Using JPEG 2000 Coding Scheme

    Yoshihide TONOMURA  Takayuki NAKACHI  Tetsuro FUJII  

     
    PAPER-Image

      Vol:
    E90-A No:3
      Page(s):
    581-589

Distributed Video Coding (DVC), based on the Slepian-Wolf and Wyner-Ziv theorems, is attracting attention as a new paradigm for video compression. Some DVC systems use intra-frame compression based on the discrete cosine transform (DCT). Unfortunately, conventional DVC systems have low affinity with DCT. In this paper, we propose a wavelet-based DVC scheme that utilizes the current JPEG 2000 standard. Accordingly, the scheme has scalability with regard to resolution and quality. In addition, we propose two methods to increase the coding gain of the new DVC scheme. One is the introduction of a Gray code, and the other involves optimum quantization. An interesting point is that, although our proposed method uses a Gray code, it still achieves quality scalability. Tests confirmed that the two methods increase the PSNR by about 5 dB, and that the PSNR of the new scheme (with both methods) is about 1.5-3 dB higher than that of conventional JPEG 2000.
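
The Gray-code step remaps quantized values so that neighbouring magnitudes differ in only one bit, which keeps bit-plane mismatches between the Wyner-Ziv data and its side information small. Below is a minimal sketch of the binary-reflected Gray code, its inverse, and bit-plane extraction (illustration only, not the paper's codec).

```python
import numpy as np

def to_gray(x):
    """Binary-reflected Gray code: adjacent integers differ in exactly one bit."""
    return x ^ (x >> 1)

def from_gray(g, bits=8):
    x = g.copy()
    shift = 1
    while shift < bits:
        x ^= x >> shift
        shift <<= 1
    return x

def bitplanes(x, bits=8):
    """Split values into bit-planes, most significant plane first."""
    return [(x >> b) & 1 for b in range(bits - 1, -1, -1)]

coeffs = np.arange(0, 256, dtype=np.uint16)        # stand-in for quantized coefficients
gray = to_gray(coeffs)
assert np.array_equal(from_gray(gray), coeffs)     # the mapping is invertible
planes = bitplanes(gray)                           # binary planes that would feed the channel coder

# Neighbouring values now differ in a single bit-plane, e.g. 127 -> 128:
print(bin(int(to_gray(np.uint16(127)))), bin(int(to_gray(np.uint16(128)))))
```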

  • Parallel Processing of Distributed Video Coding to Reduce Decoding Time

    Yoshihide TONOMURA  Takayuki NAKACHI  Tatsuya FUJII  Hitoshi KIYA  

     
    PAPER-Image Coding and Processing

      Vol:
    E92-A No:10
      Page(s):
    2463-2470

This paper proposes a parallelized DVC framework that treats each bitplane independently to reduce the decoding time. Unfortunately, simple parallelization generates inaccurate bit probabilities because additional side information is not available for the decoding of subsequent bitplanes, which degrades encoding efficiency. Our solution is an effective estimation method that calculates the bit probabilities as accurately as possible by index assignment, without recourse to side information. Moreover, we improve the coding performance of Rate-Adaptive LDPC (RA-LDPC), which is used in the parallelized DVC framework. This proposal selects a fitting sparse matrix for each bitplane according to the syndrome rate estimation results at the encoder side. Simulations show that our parallelization method reduces the decoding time by up to 35% and achieves a bit-rate reduction of about 10%.
